This work presents a theoretical framework for the safety-critical control of time delay systems. The theory of control barrier functions, which provides formal safety guarantees for delay-free systems, is extended to systems with state delay. The notion of control barrier functionals is introduced to attain formal safety guarantees through the forward invariance of safe sets defined in the infinite-dimensional state space. The proposed framework is able to handle multiple delays and distributed delays, both in the dynamics and in the safety condition, and it provides an affine constraint on the control input that yields provable safety. This constraint can be incorporated into optimization problems to synthesize optimal and provably safe controllers. The applicability of the proposed approach is demonstrated through numerical simulation examples.
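For reference, in the delay-free setting the control barrier function condition already takes such an affine form; a minimal sketch of the resulting quadratic program (the standard delay-free formulation, not the delay-specific constraint of this work, which the abstract does not spell out) is:

```latex
% Delay-free CBF quadratic program (sketch): h is the barrier function,
% \dot{x} = f(x) + g(x)u the control-affine dynamics, \alpha a class-K function.
\begin{aligned}
u^{*}(x) = \arg\min_{u \in \mathbb{R}^{m}} \quad & \tfrac{1}{2}\,\lVert u - u_{\mathrm{des}}(x) \rVert^{2} \\
\text{s.t.} \quad & L_{f}h(x) + L_{g}h(x)\,u \;\geq\; -\alpha\bigl(h(x)\bigr)
\end{aligned}
```

Any control input satisfying the affine constraint keeps the safe set forward invariant, which is why the constraint can be dropped directly into an optimization-based controller.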
Endowing nonlinear systems with safe behavior is increasingly important in modern control. This task is particularly challenging for real-life control systems that must operate safely in dynamically changing environments. This paper develops a framework for safety-critical control in dynamic environments by establishing the notion of environmental control barrier functions (ECBFs). The framework is able to guarantee safety even in the presence of input delay, by accounting for the evolution of the environment during the delayed response of the system. The underlying control synthesis relies on predicting the future state of the system and the environment over the delay interval, with robust safety guarantees against prediction errors. The efficacy of the proposed method is demonstrated on a simple adaptive cruise control problem and on a more complex robotic application on a Segway platform.
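A minimal sketch of the prediction idea, under illustrative placeholder dynamics and a scalar barrier (none of which are taken from the paper), might look as follows: forward-integrate the already-committed inputs over the delay interval, then filter the desired input at the predicted state.

```python
# Sketch of the predictor idea behind input-delay safety filters: integrate the
# nominal dynamics over the delay interval using the inputs already in flight,
# then enforce the safety condition at the predicted state. Dynamics, barrier
# and controller below are illustrative placeholders, not the paper's.
import numpy as np

def f(x, u):                                   # toy single-integrator dynamics (placeholder)
    return u

def predict_over_delay(x, input_buffer, dt):
    """Forward-Euler prediction of the state after the input delay."""
    x_pred = np.array(x, dtype=float)
    for u in input_buffer:                     # inputs committed but not yet applied
        x_pred = x_pred + dt * f(x_pred, u)
    return x_pred

def safe_input(x_pred, u_des, h, grad_h, alpha=1.0):
    """Scalar CBF-style filter at the predicted state: keep h(x) >= 0."""
    a = grad_h(x_pred)                         # dh/dx at the predicted state
    b = alpha * h(x_pred)
    if a @ u_des + b >= 0.0:                   # desired input already satisfies the constraint
        return u_des
    return u_des - (a @ u_des + b) / (a @ a) * a   # minimal-norm correction

# Example: keep x[0] >= 1 (h(x) = x[0] - 1) despite a 3-step input delay.
h = lambda x: x[0] - 1.0
grad_h = lambda x: np.array([1.0, 0.0])
x = np.array([2.0, 0.0])
committed = [np.array([-1.0, 0.0])] * 3        # inputs in flight, dt = 0.1
x_pred = predict_over_delay(x, committed, dt=0.1)
print(safe_input(x_pred, np.array([-1.0, 0.0]), h, grad_h))
```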
Domain adversarial training is ubiquitous for achieving invariant representations and is widely used for various domain adaptation tasks. Recently, methods that converge to smooth optima have been shown to improve generalization for supervised learning tasks such as classification. In this work, we analyze the effect of smoothness-enhancing formulations on domain adversarial training, whose objective is a combination of a task loss (e.g. classification, regression, etc.) and an adversarial term. We find that converging to a smooth minimum with respect to (w.r.t.) the task loss stabilizes the adversarial training, leading to better performance on the target domain. In contrast to the task loss, our analysis shows that converging to a smooth minimum w.r.t. the adversarial loss leads to sub-optimal generalization on the target domain. Based on this analysis, we introduce the Smooth Domain Adversarial Training (SDAT) procedure, which effectively enhances the performance of existing domain adversarial methods for both classification and object detection tasks. Our analysis also provides insight into the extensive use of Adam in the community for domain adversarial training.
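A toy sketch of the core idea, applying a SAM-style sharpness-aware perturbation only to the task loss while the adversarial term keeps an ordinary gradient, could look as follows; the model, data, and gradient-reversal adversary are placeholders, not the paper's setup:

```python
# Toy sketch: smooth the task loss (SAM-style ascent/descent) while leaving the
# domain-adversarial term with a plain gradient. All shapes, learning rates and
# the rho radius are illustrative assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, g):
        return -g                                        # reverse gradients for the adversary

feat = nn.Linear(10, 16)                                 # feature extractor (placeholder)
clf = nn.Linear(16, 3)                                   # task classifier
disc = nn.Linear(16, 2)                                  # domain discriminator
params = list(feat.parameters()) + list(clf.parameters()) + list(disc.parameters())
opt = torch.optim.SGD(params, lr=0.1)
ce, rho = nn.CrossEntropyLoss(), 0.05

xs, ys = torch.randn(32, 10), torch.randint(0, 3, (32,))  # labelled source batch
xt = torch.randn(32, 10)                                   # unlabelled target batch

opt.zero_grad()

# 1) Adversarial term at the current weights, via gradient reversal (plain gradient).
dom_in = GradReverse.apply(torch.cat([feat(xs), feat(xt)]))
dom_lbl = torch.cat([torch.zeros(32, dtype=torch.long), torch.ones(32, dtype=torch.long)])
ce(disc(dom_in), dom_lbl).backward()

# 2) SAM-style ascent step computed from the task loss only.
task = ce(clf(feat(xs)), ys)
grads = torch.autograd.grad(task, params, allow_unused=True)
scale = rho / (torch.norm(torch.stack([g.norm() for g in grads if g is not None])) + 1e-12)
eps = []
with torch.no_grad():
    for p, g in zip(params, grads):
        e = g * scale if g is not None else torch.zeros_like(p)
        p.add_(e)
        eps.append(e)

# 3) Task-loss gradient at the perturbed weights (accumulates into .grad).
ce(clf(feat(xs)), ys).backward()

# 4) Undo the perturbation and take the optimizer step at the original weights.
with torch.no_grad():
    for p, e in zip(params, eps):
        p.sub_(e)
opt.step()
```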
Fast neuromorphic event-based vision sensors (Dynamic Vision Sensor, DVS) can be combined with slower conventional frame-based sensors to enable higher-quality inter-frame interpolation than traditional methods that rely on fixed motion approximations such as optical flow. In this work we present a new, advanced event simulator that can produce realistic scenes recorded by a camera rig with an arbitrary number of sensors located at fixed offsets. It includes a new configurable frame-based image sensor model with realistic image quality degradation effects, and an extended DVS model with more accurate characteristics. We use our simulator to train a novel reconstruction model designed for end-to-end reconstruction of high-fps video. Unlike previously published methods, our method does not require the frame and DVS cameras to have the same optics, positions, or resolutions. It is also not limited to objects located at a fixed distance from the sensor. We show that the data generated by our simulator can be used to train our new model, leading to reconstructed images on public datasets of quality comparable to the state of the art. We also demonstrate the approach on data recorded by real sensors.
Explanations play a considerable role in human learning, especially in areas that remain major challenges for AI: forming abstractions and learning about the relational and causal structure of the world. Here, we explore whether reinforcement learning agents can likewise benefit from explanations. We outline a family of relational tasks that involve selecting an object that is the odd one out in a set (i.e., unique along one of many possible feature dimensions). Odd-one-out tasks require agents to reason over multi-dimensional relationships among a set of objects. We show that agents do not learn these tasks well from reward alone, but achieve >90% performance when they are also trained to generate language explaining object properties or why a choice is correct or incorrect. In further experiments, we show how predicting explanations enables agents to generalize appropriately from ambiguous, causally-confounded training, and even to learn to perform experimental interventions to identify causal structure. We show that explanations help overcome the tendency of agents to latch onto simple features, and we explore which aspects of explanations make them most beneficial. Our results suggest that learning from explanations is a powerful principle that offers a promising path toward training more robust and general machine learning systems.
Disentanglement is a useful property in representation learning that increases the interpretability of generative models such as variational autoencoders (VAEs), generative adversarial models, and their many variants. Typically, in such models, an increase in disentanglement performance is traded off against generation quality. In the context of latent-space models, this work presents a representation learning framework that explicitly promotes disentanglement by encouraging orthogonal directions of variation. The proposed objective is the sum of an autoencoder error term and the principal component analysis reconstruction error in the feature space. This has an interpretation as a restricted kernel machine with the eigenvector matrix constrained to lie on the Stiefel manifold. In an alternating minimization scheme, we use the Cayley ADAM algorithm, a stochastic optimization method on the Stiefel manifold, together with the Adam optimizer. Our theoretical discussion and various experiments show that the proposed model improves on many VAE variants in terms of both generation quality and disentangled representation learning.
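A schematic form of such a combined objective, with symbols introduced here only for illustration (the abstract does not fix the notation), could read:

```latex
% Sketch: autoencoder reconstruction plus PCA reconstruction error in feature
% space, with the projection matrix U constrained to the Stiefel manifold.
\min_{\theta,\; U \in \mathrm{St}(d, k)} \;
\sum_{i=1}^{n} \Bigl( \bigl\lVert x_i - \psi_\theta\bigl(\phi_\theta(x_i)\bigr) \bigr\rVert^{2}
\;+\; \lambda \,\bigl\lVert \phi_\theta(x_i) - U U^{\top} \phi_\theta(x_i) \bigr\rVert^{2} \Bigr)
```

Here \(\phi_\theta\) and \(\psi_\theta\) denote the encoder and decoder, \(U\) is a \(d \times k\) matrix with orthonormal columns, and \(U U^{\top}\) projects the features onto \(k\) orthogonal directions of variation, which is what ties the second term to disentanglement.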
The vast majority of successful deep neural networks are trained using variants of stochastic gradient descent (SGD) algorithms. Recent attempts to improve SGD can be broadly categorized into two approaches: (1) adaptive learning rate schemes, such as AdaGrad and Adam, and (2) accelerated schemes, such as heavy-ball and Nesterov momentum. In this paper, we propose a new optimization algorithm, Lookahead, that is orthogonal to these previous approaches and iteratively updates two sets of weights. Intuitively, the algorithm chooses a search direction by looking ahead at the sequence of "fast weights" generated by another optimizer. We show that Lookahead improves the learning stability and lowers the variance of its inner optimizer with negligible computation and memory cost. We empirically demonstrate Lookahead can significantly improve the performance of SGD and Adam, even with their default hyperparameter settings on ImageNet, CIFAR-10/100, neural machine translation, and Penn Treebank.
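As a concrete illustration of the two-weight update described above, here is a minimal, self-contained sketch of the Lookahead rule on a toy quadratic objective; the inner optimizer, step counts, and learning rates are illustrative choices rather than the paper's settings:

```python
# Minimal sketch of Lookahead: an inner optimizer advances "fast weights" for k
# steps, then the "slow weights" move a fraction alpha towards them. The toy
# quadratic loss and plain-SGD inner loop are illustrative stand-ins.
import numpy as np

def grad(w):                            # gradient of the toy loss 0.5 * ||w - 1||^2
    return w - 1.0

def lookahead(w0, k=5, alpha=0.5, inner_lr=0.1, outer_steps=20):
    slow = np.array(w0, dtype=float)
    for _ in range(outer_steps):
        fast = slow.copy()
        for _ in range(k):              # k inner steps of the fast optimizer (SGD here)
            fast -= inner_lr * grad(fast)
        slow += alpha * (fast - slow)   # slow weights interpolate towards the fast weights
    return slow

print(lookahead(np.array([5.0, -3.0])))  # converges towards the minimizer [1, 1]
```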
Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (non-private) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private PAC learner for the class of parity functions. This result dispels the similarity between learning with noise and private learning (both must be robust to small changes in inputs), since parity is thought to be very hard to learn given random classification noise. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Therefore, for local private learning algorithms, the similarity to learning with noise is stronger: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning.
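To make the "local (or randomized response)" notion concrete, here is a small sketch of the classic randomized-response mechanism for privately estimating a population proportion; the privacy parameter, sample size, and true proportion are illustrative values:

```python
# Randomized response, the canonical local private algorithm: each participant
# reports their true bit with probability e^eps / (1 + e^eps), otherwise the
# flipped bit, and the analyst debiases the aggregate.
import numpy as np

def randomized_response(bits, eps, rng):
    p_true = np.exp(eps) / (1.0 + np.exp(eps))        # probability of reporting truthfully
    keep = rng.random(len(bits)) < p_true
    return np.where(keep, bits, 1 - bits)

def debiased_mean(reports, eps):
    p_true = np.exp(eps) / (1.0 + np.exp(eps))
    return (reports.mean() - (1.0 - p_true)) / (2.0 * p_true - 1.0)  # unbiased estimate

rng = np.random.default_rng(0)
true_bits = (rng.random(100_000) < 0.3).astype(int)   # 30% of the population holds the bit
reports = randomized_response(true_bits, eps=1.0, rng=rng)
print(debiased_mean(reports, eps=1.0))                # close to 0.3
```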
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain the goals of optimality and computational efficiency, and it has recently been used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially-distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI-modified design shows operating characteristics comparable in learning (e.g. statistical power) but substantially better in earning (e.g. direct benefits). This illustrates the potential of designs that use a GI approach to allocate participants: improved participant benefits, increased efficiency, and reduced experimental costs in adaptive multi-armed experiments with exponential rewards.
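As a rough illustration of the setting (not the Gittins Index computation itself, which requires a separate calibration step the abstract does not describe), the sketch below runs an index-style adaptive allocation loop for exponentially distributed rewards with a conjugate Gamma posterior per arm, using the posterior mean as a hypothetical stand-in for the GI:

```python
# Index-style adaptive allocation with exponentially distributed rewards and a
# conjugate Gamma(a, b) prior on each arm's rate. The posterior-mean index is an
# illustrative placeholder; a Gittins computation would replace index().
import numpy as np

rng = np.random.default_rng(1)
true_means = [1.0, 1.5, 3.0]                # mean rewards of the 3 simulated arms
a = np.ones(3)                               # Gamma shape per arm (prior)
b = np.ones(3)                               # Gamma rate per arm (prior)

def index(a_k, b_k):
    # Placeholder index: posterior mean of the arm's mean reward, b/(a-1) for a > 1.
    return b_k / (a_k - 1.0) if a_k > 1.0 else b_k / a_k

for t in range(500):
    k = int(np.argmax([index(a[i], b[i]) for i in range(3)]))  # allocate next participant
    reward = rng.exponential(true_means[k])                    # observe exponential reward
    a[k] += 1.0                                                # conjugate update:
    b[k] += reward                                             # Gamma(a + n, b + sum of rewards)

print("allocations per arm:", a - 1.0)
print("posterior mean rewards:", b / (a - 1.0 + 1e-12))
```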
Modelling and forecasting real-life human behaviour using online social media is an active endeavour of interest in politics, government, academia, and industry. Since its creation in 2006, Twitter has been proposed as a potential laboratory that could be used to gauge and predict social behaviour. During the last decade, the user base of Twitter has been growing and becoming more representative of the general population. Here we analyse this user base in the context of the 2021 Mexican Legislative Election. To do so, we use a dataset of 15 million election-related tweets in the six months preceding election day. We explore different election models that assign political preference to either the ruling parties or the opposition. We find that models using data with geographical attributes determine the results of the election with better precision and accuracy than conventional polling methods. These results demonstrate that analysis of public online data can outperform conventional polling methods, and that political analysis and general forecasting would likely benefit from incorporating such data in the immediate future. Moreover, the same Twitter dataset with geographical attributes is positively correlated with results from official census data on population and internet usage in Mexico. These findings suggest that we have reached a period in time when online activity, appropriately curated, can provide an accurate representation of offline behaviour.